End-to-end speech recognition models trained with a joint Connectionist Temporal Classification (CTC)-Attention loss have recently gained popularity. In these models, a non-autoregressive CTC decoder is often used at inference time due to its speed and simplicity. However, such models are hard to personalize because of their conditional independence assumption, which prevents output tokens from previous time steps from influencing future predictions. To tackle this, we propose a novel two-way approach that first biases the encoder with attention over a predefined list of rare long-tail and out-of-vocabulary (OOV) words, and then uses dynamic boosting and a phone alignment network during decoding to further bias the subword predictions. We evaluate our approach on the open-source VoxPopuli dataset and an in-house medical dataset, showing a 60% improvement in F1 score on domain-specific rare words over a strong CTC baseline.
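As a rough illustration of the first stage only (the abstract does not specify the architecture, so module names, dimensions, and the use of a plain `nn.MultiheadAttention` are assumptions for illustration), encoder biasing over an embedded list of rare words could be sketched like this:

```python
import torch
import torch.nn as nn

class BiasListAttention(nn.Module):
    """Minimal sketch: bias an acoustic encoder with attention over a
    predefined list of rare/OOV words (layer choices and sizes are assumed)."""

    def __init__(self, d_model=256, vocab_size=1000, n_heads=4):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)   # embeds bias-list tokens
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, enc_states, bias_word_ids):
        # enc_states: (B, T, d_model) acoustic encoder output
        # bias_word_ids: (B, N) token ids of the predefined rare-word list
        keys = self.word_emb(bias_word_ids)                  # (B, N, d_model)
        ctx, _ = self.attn(query=enc_states, key=keys, value=keys)
        # Fuse the biasing context back into the encoder representation.
        return self.out(torch.cat([enc_states, ctx], dim=-1))

enc = torch.randn(2, 50, 256)                # 2 utterances, 50 frames
bias_ids = torch.randint(0, 1000, (2, 20))   # 20 bias words per utterance
biased = BiasListAttention()(enc, bias_ids)  # (2, 50, 256)
```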
Detecting anomalous data within time series is a highly relevant task in pattern recognition and machine learning, with many possible applications ranging from disease prevention in medicine (e.g., detecting early alterations of the health status before they can clearly be defined as "illness") to the monitoring of industrial plants. Regarding this latter application, detecting anomalies in an industrial plant's status first of all prevents serious damage that would require a long interruption of the production process. Secondly, it permits optimal scheduling of maintenance interventions by limiting them to truly urgent situations, whereas at present they typically follow a fixed prudential schedule according to which components are replaced well before the end of their expected lifetime. This paper describes a case study regarding the monitoring of the status of the batteries of Laser-Guided Vehicles (LGVs), on which we worked as our contribution to project SUPER (Supercomputing Unified Platform, Emilia-Romagna), which aims to establish and demonstrate a regional High-Performance Computing platform that will represent the main Italian supercomputing environment in terms of both computing power and data volume.
This paper concerns the design of an automated machine to cut turbot fish specimens. Machine vision is a key part of this project, as it is used to compute a cutting curve for the specimen's head, a task that cannot be carried out by mechanical means alone. Machine vision is used to detect the head boundary, and a robot is used to cut the head. Binarization and mathematical morphology are used to detect the fish boundary, which is subsequently analyzed (using the Hough transform and convex hull) to detect key points and thus define the cutting curve. Afterwards, mechanical systems slice the fish into a presentation convenient for the end consumer (fish fillets that can be easily marketed and consumed).
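A minimal OpenCV sketch of the boundary-detection steps mentioned above (binarization, morphological cleanup, convex hull); thresholds, kernel sizes, and the input file name are illustrative assumptions, not the paper's actual parameters:

```python
import cv2
import numpy as np

def fish_boundary(gray):
    """Sketch of the vision pipeline: binarize, clean up with morphology,
    then extract the outer contour and its convex hull."""
    # Otsu binarization separates the specimen from the (assumed darker) background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological closing removes small holes and spurious speckles.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    clean = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # The largest external contour is taken as the fish boundary.
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boundary = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(boundary)   # convexity analysis then yields key points
    return boundary, hull

gray = cv2.imread("turbot.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if gray is not None:
    boundary, hull = fish_boundary(gray)
```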
The unfolding of detector effects is crucial for the comparison of data to theory predictions. While traditional methods are limited to representing the data in a low number of dimensions, machine learning has enabled new unfolding techniques that retain the full dimensionality. Generative networks like invertible neural networks~(INN) enable a probabilistic unfolding, mapping individual events to their corresponding unfolded probability distributions. The accuracy of such methods is, however, limited by how well the simulated training samples model the actual data being unfolded. We introduce the iterative conditional INN~(IcINN) for unfolding, which adjusts for deviations between simulated training samples and data. The IcINN unfolding is first validated on toy data and then applied to pseudo-data for the $pp \to Z \gamma \gamma$ process.
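The iterative adjustment can be pictured roughly as follows; `train_cinn`, `unfold`, and `reweight` are hypothetical placeholders standing in for the conditional-INN training, event-wise unfolding, and simulation reweighting steps, not a real API or the paper's exact procedure:

```python
# Pseudocode-style sketch of an iterative conditional-INN unfolding loop
# (all functions below are hypothetical placeholders, not a real library).

def icinn_unfold(sim_truth, sim_reco, data_reco, n_iterations=3):
    weights = [1.0] * len(sim_truth)          # start from unweighted simulation
    for _ in range(n_iterations):
        # 1. Train the conditional INN on (weighted) simulated pairs:
        #    it learns p(truth | reco) from the simulation.
        cinn = train_cinn(sim_truth, sim_reco, weights)
        # 2. Unfold the measured data: sample truth-level events from the
        #    learned posterior, conditioned on each observed data event.
        unfolded = unfold(cinn, data_reco)
        # 3. Reweight the simulated truth sample towards the unfolded
        #    distribution, so the next iteration's training better matches data.
        weights = reweight(sim_truth, unfolded)
    return unfolded, weights
```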
In this paper, we consider incorporating data associated with the sun's north and south polar field strengths to improve solar flare prediction performance using machine learning models. When used to supplement local data from active regions on the photospheric magnetic field of the sun, the polar field data provides global information to the predictor. While such global features have previously been proposed for predicting the intensity of the next solar cycle, in this paper we propose using them to help classify individual solar flares. We conduct experiments on HMI data with four different machine learning algorithms that can exploit the polar field information. Additionally, we propose a novel probabilistic mixture of experts model that can simply and effectively incorporate polar field data and provides on-par prediction performance with state-of-the-art solar flare prediction algorithms such as the Recurrent Neural Network (RNN). Our experimental results indicate the usefulness of the polar field data for solar flare prediction, which can improve the Heidke Skill Score (HSS2) by as much as 10.1%.
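One way to picture the proposed combination of local active-region features with global polar-field features is a gated mixture of experts; the sketch below is an illustrative PyTorch formulation with assumed feature dimensions and linear experts, not the authors' exact model:

```python
import torch
import torch.nn as nn

class FlareMixtureOfExperts(nn.Module):
    """Sketch: two experts (local active-region features, global polar-field
    features) combined by a softmax gate into a flare probability."""

    def __init__(self, d_local=20, d_polar=4, n_experts=2):
        super().__init__()
        self.local_expert = nn.Linear(d_local, 1)            # active-region features
        self.polar_expert = nn.Linear(d_polar, 1)            # north/south polar fields
        self.gate = nn.Linear(d_local + d_polar, n_experts)  # mixing weights

    def forward(self, x_local, x_polar):
        logits = torch.cat([self.local_expert(x_local),
                            self.polar_expert(x_polar)], dim=-1)   # (B, 2)
        probs = torch.sigmoid(logits)                               # per-expert flare prob.
        gate = torch.softmax(self.gate(torch.cat([x_local, x_polar], dim=-1)), dim=-1)
        return (gate * probs).sum(dim=-1)                           # mixture probability

model = FlareMixtureOfExperts()
p_flare = model(torch.randn(8, 20), torch.randn(8, 4))  # 8 samples -> (8,) probabilities
```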
Predicting drug side-effects before they occur is a key task in keeping the number of drug-related hospitalizations low and in improving drug discovery processes. Automatic predictors of side-effects are generally unable to process the structure of the drug, resulting in a loss of information. Graph neural networks have seen great success in recent years, thanks to their ability to exploit the information conveyed by the graph structure and labels. These models have been used in a wide variety of biological applications, among which is the prediction of drug side-effects on a large knowledge graph. Exploiting the molecular graph encoding the structure of the drug represents a novel approach, in which the problem is formulated as a multi-class, multi-label, graph-focused classification. We developed a methodology to carry out this task using recurrent Graph Neural Networks, building a dataset from freely accessible and well-established data sources. The results show that our method has improved classification capability, under many parameter settings and metrics, with respect to previously available predictors.
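A minimal sketch of the graph-focused multi-label formulation, written with PyTorch Geometric; the library, the GCN layer choice, and the dimensions are assumptions made for illustration (the paper itself uses recurrent Graph Neural Networks):

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class SideEffectGNN(nn.Module):
    """Sketch: encode a molecular graph and predict a multi-label vector
    of side-effects for the whole drug (graph-focused classification)."""

    def __init__(self, n_atom_features=16, hidden=64, n_side_effects=100):
        super().__init__()
        self.conv1 = GCNConv(n_atom_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_side_effects)   # one logit per side-effect

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)                   # graph-level readout
        return self.head(h)                              # multi-label logits

# Multi-label training uses a per-label binary cross-entropy on these logits:
criterion = nn.BCEWithLogitsLoss()
```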
Autonomous vehicles must be able to reliably handle adverse weather conditions (e.g., snow) to operate safely. In this paper, we study the idea of converting sensor inputs (i.e., images) captured under adverse conditions so that downstream tasks (e.g., semantic segmentation) can achieve high accuracy. Prior work has mostly framed this as an unpaired image-to-image translation problem, because of the lack of paired images captured under exactly the same camera pose and semantic layout. While perfectly aligned images are unavailable, coarsely paired images can easily be obtained. For example, many people drive the same route every day in both good and adverse weather; thus, images captured at nearby GPS locations can form pairs. Although data from repeated traversals are unlikely to capture the same foreground objects, we argue that they provide rich contextual information to supervise the image translation model. To this end, we propose a novel training objective that leverages coarsely aligned image pairs. We show that our coarsely-aligned training scheme leads to better image translation quality and improved downstream tasks, such as semantic segmentation, monocular depth estimation, and visual localization.
Recently, efforts have been made to standardize Signal Phase and Timing (SPaT) messages. These messages contain the signal phase timings of all signalized intersection approaches. This information can therefore be used for efficient motion planning, resulting in more homogeneous traffic flow and uniform speed profiles. Despite efforts to provide reliable predictions for semi-actuated signal control systems, predicting the signal phase timings of fully actuated controls remains challenging. This paper proposes a time series prediction framework using aggregated traffic signal and loop detector data. We utilize state-of-the-art machine learning models to predict the duration of future signal phases. The performance of a linear regression (LR), a random forest (RF), and a long short-term memory (LSTM) neural network is evaluated against a naive baseline model. Results based on an empirical dataset from a fully actuated signal control system in Zurich, Switzerland, show that the machine learning models outperform conventional prediction methods. Furthermore, tree-based decision models such as the RF perform best, with an accuracy that satisfies the requirements of practical applications.
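As an illustration of such a prediction framework (the lag-based feature construction and model settings below are assumptions, not those of the paper), predicting the next phase duration from the previous cycles could look like this with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def make_lagged_features(durations, n_lags=5):
    """Predict the next phase duration from the previous n_lags cycles."""
    X = np.array([durations[i - n_lags:i] for i in range(n_lags, len(durations))])
    y = np.array(durations[n_lags:])
    return X, y

durations = np.random.uniform(10, 60, size=500).tolist()  # placeholder phase durations [s]
X, y = make_lagged_features(durations)
split = int(0.8 * len(X))                                  # chronological train/test split

for name, model in [("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAE = {mae:.1f} s")
```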
Convolutional neural networks (CNNs) have achieved great success in many computer vision tasks, such as image classification and object detection. However, their performance degrades rapidly on harder tasks where images are of low resolution or objects are small. In this paper, we point out that this is rooted in a flawed yet common design in existing CNN architectures, namely the use of strided convolution and/or pooling layers, which results in a loss of fine-grained information and the learning of less effective feature representations. To this end, we propose a new CNN building block called SPD-Conv in place of each strided convolution layer and each pooling layer (thus eliminating them altogether). SPD-Conv is comprised of a space-to-depth (SPD) layer followed by a non-strided convolution (Conv) layer, and can be applied in most if not all CNN architectures. We explain this new design under two of the most representative computer vision tasks: object detection and image classification. We then create new CNN architectures by applying SPD-Conv to YOLOv5 and ResNet, and empirically show that our approach significantly outperforms state-of-the-art deep learning models, especially on harder tasks with low-resolution images and small objects. We have open-sourced our code at https://github.com/labsaint/spd-conv.
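Based on the description above, a space-to-depth step followed by a stride-1 convolution can be sketched in PyTorch roughly as follows; the scale factor of 2 and the layer details are illustrative assumptions, and the linked repository contains the authors' actual implementation:

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Sketch of an SPD-Conv block: space-to-depth (no information loss)
    followed by a non-strided convolution, replacing stride-2 conv/pooling."""

    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_channels * scale ** 2, out_channels,
                              kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        s = self.scale
        b, c, h, w = x.shape
        # Space-to-depth: fold each s x s spatial block into the channel dim,
        # so the resolution drops by s without discarding any pixels.
        x = x.view(b, c, h // s, s, w // s, s)
        x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * s * s, h // s, w // s)
        return self.conv(x)

y = SPDConv(64, 128)(torch.randn(1, 64, 32, 32))   # -> (1, 128, 16, 16)
```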
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under good weather conditions. Yet, to achieve the high safety requirements, these perception systems must operate robustly under a variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scenes (urban, highway, rural, campus), weather (snow, rain, sun), times (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across traversals of the route. The dataset includes road and object annotations using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines on amodal segmentation of roads and objects, depth estimation, and 3D object detection. The repeated traversals open up new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/